
    Generating Correlated Ordinal Random Values

    Ordinal variables appear in many fields of statistical research. Since working with simulated data is an accepted technique for improving models or testing results, there is a need for generating correlated ordinal random values with prescribed properties such as marginal distributions and a correlation structure. The present paper describes two methods for generating such values: binary conversion and a mean mapping approach. The algorithms of both methods are described and some examples of their outcomes are shown.
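
    The paper's binary-conversion and mean-mapping algorithms are not reproduced here. As an illustration only, the sketch below generates correlated ordinal values with a different, common construction: thresholding correlated latent normal variables at cut points derived from the target marginals. All function names and parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def ordinal_from_latent_normal(n, marginals, latent_corr, seed=0):
    """Draw correlated ordinal values by thresholding correlated normals.

    marginals   : list of marginal probability vectors, one per variable
    latent_corr : correlation matrix of the latent normal variables
    Returns an (n, k) array with categories coded 1..K_j in column j.
    """
    rng = np.random.default_rng(seed)
    k = len(marginals)
    z = rng.multivariate_normal(np.zeros(k), latent_corr, size=n)
    out = np.empty((n, k), dtype=int)
    for j, p in enumerate(marginals):
        cuts = norm.ppf(np.cumsum(p)[:-1])        # K_j - 1 thresholds
        out[:, j] = np.searchsorted(cuts, z[:, j]) + 1
    return out

x = ordinal_from_latent_normal(
    n=10_000,
    marginals=[[0.2, 0.5, 0.3], [0.4, 0.4, 0.2]],
    latent_corr=np.array([[1.0, 0.6], [0.6, 1.0]]),
)
print(np.bincount(x[:, 0])[1:] / len(x))  # close to the requested [0.2, 0.5, 0.3]
print(np.corrcoef(x.T))                   # positive, but below the latent 0.6
```

    Note that the resulting ordinal correlation is weaker than the latent one, which is precisely why dedicated methods like those in the paper are needed to hit a prescribed correlation structure.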

    Having the Second Leg At Home - Advantage in the UEFA Champions League Knockout Phase?

    In soccer knockout ties that are played in a two-legged format, the team playing the return match at home is usually seen as advantaged. To check this common belief, we analyzed matches of the UEFA Champions League knockout phase since 1995. It is shown that the observed differences in winning frequencies between teams playing the first leg away and teams playing the first leg at home can be completely explained by their performances in the group stage and, more importantly, by the teams' general strength.

    Mermin-Wagner fluctuations in 2D amorphous solids

    In a recent comment [J. Phys.: Condens. Matter 28, 481001 (2016)], M. Kosterlitz described how the discrepancy between the lack of broken translational symmetry in two dimensions, which casts doubt on the existence of 2D crystals, and the first computer simulations foretelling 2D crystals at least in tiny systems motivated him and D. Thouless to investigate melting and superfluidity in two dimensions. The lack of broken symmetries proposed by D. Mermin and H. Wagner is caused by long-wavelength density fluctuations. These fluctuations have not only a structural impact but also a dynamical one: they cause the Lindemann criterion to fail in 2D and the mean squared displacement to be unbounded. Comparing experimental data from 3D and 2D amorphous solids with 2D crystals, we disentangle Mermin-Wagner fluctuations from glassy structural relaxations. Furthermore, we demonstrate with computer simulations the logarithmic increase of displacements predicted by Mermin and Wagner: periodicity is not a requirement for Mermin-Wagner fluctuations, which conserve the homogeneity of space on long scales.
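
    As a schematic reminder of where the logarithm comes from (not taken from the paper, and with elastic prefactors suppressed), the long-wavelength phonon contribution to the mean squared displacement of a 2D harmonic solid of linear size L with microscopic cutoff a behaves as

```latex
% Schematic 2D estimate: equipartition over acoustic modes with
% \omega = c k gives a logarithmically divergent mean squared displacement.
\begin{equation*}
\langle u^2 \rangle \;\propto\; k_B T \int_{2\pi/L}^{2\pi/a}
  \frac{\mathrm{d}^2 k}{(2\pi)^2}\,\frac{1}{\rho c^2 k^2}
  \;=\; \frac{k_B T}{2\pi \rho c^2}\,\ln\frac{L}{a},
\end{equation*}
```

    so displacements grow logarithmically with system size rather than saturating, consistent with the failure of the Lindemann criterion mentioned above.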

    Profound effect of profiling platform and normalization strategy on detection of differentially expressed microRNAs

    Adequate normalization minimizes the effects of systematic technical variation and is a prerequisite for detecting meaningful biological changes. However, reported miRNA normalization performances and the resulting recommendations are inconsistent. We therefore investigated the impact of seven different normalization methods (reference gene index (RGI), global geometric mean, quantile, invariant selection, loess, loessM, and generalized procrustes analysis (GPA)) on the intra- and inter-platform performance of two distinct and commonly used miRNA profiling platforms. We included data from miRNA profiling analyses derived from a hybridization-based platform (Agilent Technologies, AGL) and an RT-qPCR platform (Applied Biosystems TaqMan Low Density Arrays, TLDA). Furthermore, we validated a subset of miRNAs by individual RT-qPCR assays. Our analyses incorporated data on the effects of differentiation and tumor necrosis factor alpha treatment on primary human skeletal muscle cells and a murine skeletal muscle cell line. The normalization methods differed in their impact on (i) standard deviations, (ii) the area under the receiver operating characteristic (ROC) curve, and (iii) the similarity of differential expression calls. Loess, loessM, and quantile normalization were most effective in minimizing standard deviations on the Agilent and TLDA platforms. Moreover, loess, loessM, invariant selection, and GPA increased the area under the ROC curve, a measure of the statistical performance of a test. The Jaccard index revealed that inter-platform concordance of differential expression tended to be increased by loess, loessM, quantile, and GPA normalization of AGL and TLDA data, as well as by RGI normalization of TLDA data. We recommend the application of loess or loessM, as well as GPA normalization, for miRNA Agilent arrays and qPCR cards, as these approaches were shown to (i) effectively reduce standard deviations, (ii) increase the sensitivity and accuracy of differential miRNA expression detection, and (iii) increase inter-platform concordance. The results also demonstrate the successful adaptation of loessM and GPA to one-color miRNA profiling experiments.
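
    As a small illustration of the concordance measure used above, the Jaccard index of two sets of differentially expressed miRNAs is the size of their intersection divided by the size of their union. The miRNA lists below are purely hypothetical.

```python
def jaccard_index(a, b):
    """Jaccard index: shared elements divided by all distinct elements."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical differential-expression calls on the two platforms
de_agilent = {"miR-1", "miR-21", "miR-133a", "miR-206"}
de_tlda    = {"miR-1", "miR-133a", "miR-155"}

print(jaccard_index(de_agilent, de_tlda))  # 2 shared / 5 distinct = 0.4
```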

    Sensitivity of the polar boundary layer to transient phenomena


    Close-to-process compensation of geometric deviations on implants based on optical measurement data

    The production of implants is challenging due to their complex shapes, their filigree structures, and the great regulatory effort involved. Therefore, a manufacturing cell with an integrated optical measurement system was realized. The measurement system is used to determine the geometric deviations and to fulfill the documentation obligation. The measurement data are used to create a matrix containing the nominal coordinates and the error vector for compensation points on the relevant shapes. Based on this, the corresponding tool-path segments are isolated in the G-code and compensated in order to reduce the geometric deviations. With this method, the deviation could be reduced by 85 %. However, it is pointed out that the result of the compensation strongly depends on the quality of the optical measurement data.
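
    The authors' implementation is not described in detail in the abstract. The following rough sketch only illustrates the general idea under simplifying assumptions: linear moves in the G-code whose target point lies close to a measured compensation point are shifted against the measured error vector (feed rates and other words on a modified line are dropped for brevity; all names are hypothetical).

```python
import re
import numpy as np

def compensate_gcode(lines, comp_points, errors, radius=0.5):
    """Shift G1 moves by the negative error vector of the nearest
    compensation point, if that point lies within `radius` (mm).

    comp_points : (m, 3) array of nominal compensation-point coordinates
    errors      : (m, 3) array of measured error vectors at those points
    """
    word = re.compile(r"([XYZ])(-?\d+\.?\d*)")
    out = []
    for line in lines:
        vals = dict(word.findall(line))
        if not line.startswith("G1") or not {"X", "Y", "Z"} <= vals.keys():
            out.append(line)                 # leave non-moves untouched
            continue
        p = np.array([float(vals[a]) for a in "XYZ"])
        d = np.linalg.norm(comp_points - p, axis=1)
        i = int(np.argmin(d))
        if d[i] <= radius:                   # only correct near a measured point
            p = p - errors[i]                # push the path against the deviation
        out.append(f"G1 X{p[0]:.3f} Y{p[1]:.3f} Z{p[2]:.3f}")
    return out
```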

    Biclustering: Methods, Software and Application

    Over the past 10 years, biclustering has become popular not only in the field of biological data analysis but also in other applications with high-dimensional two-way datasets. This technique clusters rows and columns simultaneously, as opposed to clustering only rows or only columns. Biclustering retrieves subgroups of objects that are similar in one subgroup of variables and different in the remaining variables. This dissertation focuses on improving and advancing biclustering methods. Since most existing methods are extremely sensitive to variations in parameters and data, we developed an ensemble method to overcome these limitations. More stable and reliable biclusters can be retrieved in two ways: either by running algorithms with different parameter settings or by running them on sub- or bootstrap samples of the data and combining the results. To this end, we designed a software package containing a collection of bicluster algorithms for different clustering tasks and data scales, developed several new ways of visualizing bicluster solutions, and adapted traditional cluster validation indices (e.g. the Jaccard index) to the bicluster framework. Finally, we applied biclustering to marketing data. Well-established algorithms were adjusted to slightly different data situations, and a new method specially adapted to ordinal data was developed. In order to test this method on artificial data, we generated correlated ordinal random values. This dissertation introduces two methods for generating such values given a probability vector and a correlation structure. All the methods outlined in this dissertation are freely available in the R packages biclust and orddata. Numerous examples in this work illustrate how to use the methods and software.
    Over the past 10 years, biclustering has become increasingly popular, above all in the field of biological data analysis, but also in all areas dealing with high-dimensional data. Biclustering refers to the simultaneous clustering of two-way data in order to find subsets of objects that behave similarly on subsets of variables. This thesis deals with the further development and optimization of biclustering methods. In addition to the development of a software package for computing, processing, and graphically displaying bicluster results, an ensemble method for bicluster algorithms was developed. Since most algorithms are very sensitive to small changes in the starting parameters, more robust results can be obtained in this way. The new method also includes the combination of bicluster results obtained on subsamples and bootstrap samples. To validate the results, existing measures from traditional clustering (e.g. the Jaccard index) were adapted to biclustering, and new graphical tools for interpreting the results were developed. A further part of the thesis deals with the application of bicluster algorithms to data from the marketing domain. For this purpose, existing algorithms had to be modified, and a new algorithm specifically for ordinal data was developed. To enable testing of these methods on artificial data, the thesis also includes the development of a procedure for drawing ordinal random numbers with given probabilities and correlation structure. The methods presented in the thesis are generally available through the two R packages biclust and orddata. Their usability is demonstrated in the thesis through numerous examples.
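
    The dissertation's methods are provided in the R packages biclust and orddata; their interfaces are not reproduced here. As a purely illustrative sketch of what a bicluster result looks like, the example below uses scikit-learn's SpectralCoclustering on synthetic two-way data with planted biclusters and checks how well the planted structure is recovered.

```python
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters
from sklearn.metrics import consensus_score

# Synthetic two-way data with three planted biclusters
data, true_rows, true_cols = make_biclusters(
    shape=(120, 80), n_clusters=3, noise=5, random_state=0
)

model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(data)

# Each bicluster is a subset of rows together with a subset of columns
for i in range(3):
    row_idx, col_idx = model.get_indices(i)
    print(f"bicluster {i}: {len(row_idx)} rows x {len(col_idx)} columns")

# Agreement with the planted structure (1.0 means perfect recovery)
print(consensus_score(model.biclusters_, (true_rows, true_cols)))
```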